The returns appear weakly stationary, with few instances of high correlation. The ACF/PACF suggest candidate orders around p = 4 and q = 4, pointing to an ARIMA model.
Code
ggAcf(returns_ups^2, na.action = na.pass)
Code
ggPacf(returns_ups^2, na.action = na.pass)
The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, it would be more appropriate to use a GARCH model.
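The visual diagnosis can be backed by a formal test. As a minimal sketch (assuming `returns_ups` is the return series already in memory), a Ljung-Box test on the squared returns checks for ARCH effects:

```r
# Ljung-Box test on the squared returns: a small p-value indicates
# autocorrelation in the squared series (volatility clustering),
# supporting the case for a GARCH model.
Box.test(returns_ups^2, lag = 10, type = "Ljung-Box")
```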
The returns appear weakly stationary, with few instances of high correlation. The ACF/PACF suggest candidate orders of p = 1 or 4 and q = 1, 4, or 5, pointing to an ARIMA model.
Code
ggAcf(returns_jbht^2, na.action = na.pass)
Code
ggPacf(returns_jbht^2, na.action = na.pass)
The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, it would be more appropriate to use a GARCH model.
p d q AIC BIC AICc
63 5 0 3 -20187.05 -20118.99 -20186.98
Code
temp[which.min(temp$BIC),]
p d q AIC BIC AICc
2 0 1 0 -20176.29 -20163.92 -20176.29
Code
temp[which.min(temp$AICc),]
p d q AIC BIC AICc
63 5 0 3 -20187.05 -20118.99 -20186.98
The lowest AIC (and AICc) model is ARIMA(5,0,3); BIC instead favors ARIMA(0,1,0).
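The chunk that built the `temp` table is not shown. One way it could have been produced (a sketch, assuming `log.ups` is the log-price series and using `forecast::Arima`) is a grid search over (p, d, q) that records each fit's information criteria:

```r
library(forecast)

# Hypothetical reconstruction of the order-selection grid behind `temp`:
# fit every ARIMA(p,d,q) on a small grid and collect AIC, BIC, and AICc.
temp <- data.frame()
for (p in 0:5) for (d in 0:1) for (q in 0:5) {
  fit <- tryCatch(Arima(log.ups, order = c(p, d, q)),
                  error = function(e) NULL)   # skip orders that fail to fit
  if (!is.null(fit)) {
    temp <- rbind(temp, data.frame(p = p, d = d, q = q,
                                   AIC = fit$aic, BIC = fit$bic, AICc = fit$aicc))
  }
}
temp[which.min(temp$AIC), ]
```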
Code
# model diagnostics
sarima(log.ups, 0, 1, 0)
initial value -4.226436
iter 1 value -4.226436
final value -4.226436
converged
initial value -4.226436
iter 1 value -4.226436
final value -4.226436
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 4e-04 2e-04 1.5201 0.1286
sigma^2 estimated as 0.0002132871 on 3593 degrees of freedom
AIC = -5.613881 AICc = -5.613881 BIC = -5.610438
Code
sarima(log.ups, 5,0,3)
initial value -0.791585
...
final value -4.229442
converged
initial value -4.226792
...
final value -4.227671
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 0.6019 0.1131 5.3198 0.0000
ar2 0.0159 0.1589 0.1003 0.9201
ar3 -0.3337 0.1271 -2.6247 0.0087
ar4 0.6608 0.1295 5.1016 0.0000
ar5 0.0543 0.0178 3.0438 0.0024
ma1 0.3834 0.1124 3.4098 0.0007
ma2 0.3872 0.0926 4.1794 0.0000
ma3 0.7162 0.1213 5.9026 0.0000
xmean 4.4787 0.5251 8.5296 0.0000
sigma^2 estimated as 0.000212317 on 3586 degrees of freedom
AIC = -5.611901 AICc = -5.611888 BIC = -5.594691
According to the model diagnostics, ARIMA(0,1,0) is the better model.
The Standardized Residuals plot of the model still shows pronounced volatility clustering, so further modeling of the variance is required.
p d q AIC BIC AICc
29 3 0 2 -19359.14 -19309.64 -19359.1
Code
temp[which.min(temp$BIC),]
p d q AIC BIC AICc
2 0 1 0 -19333.52 -19321.15 -19333.52
Code
temp[which.min(temp$AICc),]
p d q AIC BIC AICc
29 3 0 2 -19359.14 -19309.64 -19359.1
The lowest AIC (and AICc) model is ARIMA(3,0,2); BIC instead favors ARIMA(0,1,0).
Code
# model diagnostics
sarima(log.jbht, 0, 1, 0)
initial value -4.109189
iter 1 value -4.109189
final value -4.109189
converged
initial value -4.109189
iter 1 value -4.109189
final value -4.109189
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 5e-04 3e-04 1.8773 0.0606
sigma^2 estimated as 0.0002696519 on 3593 degrees of freedom
AIC = -5.379389 AICc = -5.379388 BIC = -5.375946
Code
sarima(log.jbht, 3,0,2)
initial value -0.618224
...
final value -4.111984
converged
initial value -4.110599
...
final value -4.110918
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 -0.6577 0.0622 -10.5702 0
ar2 0.8291 0.0376 22.0489 0
ar3 0.8277 0.0734 11.2766 0
ma1 1.6252 0.0681 23.8518 0
ma2 0.7850 0.0799 9.8280 0
xmean 4.4856 0.5794 7.7418 0
sigma^2 estimated as 0.0002681614 on 3589 degrees of freedom
AIC = -5.380064 AICc = -5.380058 BIC = -5.368017
According to the model diagnostics, ARIMA(3,0,2) is the better model.
The Standardized Residuals plot of the model still shows pronounced volatility clustering, so further modeling of the variance is required.
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002134: log likelihood = 10088.98
AIC=-20175.97 AICc=-20175.97 BIC=-20169.78
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003720833 0.01460716 0.009704563 0.008215162 0.2164644
MASE ACF1
Training set 0.9998258 -0.01549334
Code
res.ups <- fit.ups$res
ggtsdisplay(res.ups^2)
The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, it would be more appropriate to use a GARCH model.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.ups, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6577 0.8291 0.8277 1.6252 0.7850 4.4856
s.e. 0.0622 0.0376 0.0734 0.0681 0.0799 0.5794
sigma^2 = 0.0002686: log likelihood = 9677.67
AIC=-19341.33 AICc=-19341.3 BIC=-19298.02
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0005092893 0.01637563 0.01181002 0.01088333 0.2686759 0.9997226
ACF1
Training set -0.003501952
Code
res.jbht <- fit.jbht$res
ggtsdisplay(res.jbht^2)
The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, it would be more appropriate to use a GARCH model.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.jbht, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002134: log likelihood = 10088.98
AIC=-20175.97 AICc=-20175.97 BIC=-20169.78
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003720833 0.01460716 0.009704563 0.008215162 0.2164644
MASE ACF1
Training set 0.9998258 -0.01549334
Code
summary(garchFit(~garch(1,4), res.ups,trace = F))
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 4), data = res.ups, trace = F)
Mean and Variance Equation:
data ~ garch(1, 4)
<environment: 0x14d0f9820>
[data = res.ups]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1 beta2 beta3
3.3270e-04 1.9563e-06 6.1465e-02 1.5440e-01 1.0000e-08 2.3067e-01
beta4
5.4621e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 3.327e-04 2.047e-04 1.626 0.104025
omega 1.956e-06 5.724e-07 3.418 0.000631 ***
alpha1 6.146e-02 9.267e-03 6.633 3.30e-11 ***
beta1 1.544e-01 1.166e-01 1.324 0.185584
beta2 1.000e-08 1.009e-01 0.000 1.000000
beta3 2.307e-01 1.556e-01 1.483 0.138174
beta4 5.462e-01 1.098e-01 4.974 6.55e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
10390.55 normalized: 2.890279
Description:
Wed Apr 17 23:41:59 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1.738593e+04 0.0000000
Shapiro-Wilk Test R W 9.083531e-01 0.0000000
Ljung-Box Test R Q(10) 1.293125e+01 0.2275411
Ljung-Box Test R Q(15) 2.218871e+01 0.1029505
Ljung-Box Test R Q(20) 2.549020e+01 0.1833164
Ljung-Box Test R^2 Q(10) 5.152022e+00 0.8807940
Ljung-Box Test R^2 Q(15) 6.230637e+00 0.9756077
Ljung-Box Test R^2 Q(20) 7.638312e+00 0.9940030
LM Arch Test R TR^2 5.492463e+00 0.9394799
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.776664 -5.764617 -5.776672 -5.772370
Code
checkresiduals(garch(res.ups, order = c(4, 1), trace = F))
Ljung-Box test
data: Residuals
Q* = 6.6183, df = 10, p-value = 0.7609
Model df: 0. Total lags used: 10
The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plots, and the AIC values are relatively low, indicating a good fit. However, not all GARCH coefficients are statistically significant (beta1, beta2, and beta3 are not), suggesting that some of the volatility structure may not be well identified. Finally, the Ljung-Box test p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
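With a fitted GARCH in hand, the conditional volatility can be forecast. A sketch follows (the object name `garch.ups` is hypothetical; it simply re-fits the GARCH(1,4) selected above, assuming `res.ups` is in memory):

```r
library(fGarch)

# Refit the selected GARCH(1,4) on the UPS ARIMA residuals
garch.ups <- garchFit(~ garch(1, 4), data = res.ups, trace = FALSE)

# 10-step-ahead forecasts of the conditional mean and standard deviation
predict(garch.ups, n.ahead = 10)
```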
Best model: ARIMA(3,0,2)+GARCH(1,1)
Code
# returns_jbht
summary(fit.jbht)
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6577 0.8291 0.8277 1.6252 0.7850 4.4856
s.e. 0.0622 0.0376 0.0734 0.0681 0.0799 0.5794
sigma^2 = 0.0002686: log likelihood = 9677.67
AIC=-19341.33 AICc=-19341.3 BIC=-19298.02
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0005092893 0.01637563 0.01181002 0.01088333 0.2686759 0.9997226
ACF1
Training set -0.003501952
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F)
Mean and Variance Equation:
data ~ garch(1, 1)
<environment: 0x13ce282c0>
[data = res.jbht]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1
4.8548e-04 5.6928e-06 4.4634e-02 9.3358e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 4.855e-04 2.467e-04 1.968 0.0491 *
omega 5.693e-06 1.401e-06 4.063 4.85e-05 ***
alpha1 4.463e-02 6.566e-03 6.798 1.06e-11 ***
beta1 9.336e-01 1.034e-02 90.276 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
9875.473 normalized: 2.747002
Description:
Wed Apr 17 23:42:00 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1601.3904450 0.0000000
Shapiro-Wilk Test R W 0.9726164 0.0000000
Ljung-Box Test R Q(10) 6.7837633 0.7456891
Ljung-Box Test R Q(15) 10.7783010 0.7681461
Ljung-Box Test R Q(20) 12.3203332 0.9046092
Ljung-Box Test R^2 Q(10) 4.7290843 0.9085253
Ljung-Box Test R^2 Q(15) 11.0855072 0.7465097
Ljung-Box Test R^2 Q(20) 17.3735653 0.6286002
LM Arch Test R TR^2 10.6790616 0.5566048
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.491779 -5.484895 -5.491782 -5.489326
Code
checkresiduals(garch(res.jbht, order =c(1,1),trace = F))
Ljung-Box test
data: Residuals
Q* = 6.6788, df = 10, p-value = 0.7554
Model df: 0. Total lags used: 10
The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plots, and the AIC values are relatively low, indicating a good fit. All ARIMA and GARCH coefficients are statistically significant at the 5% level, and the Ljung-Box test p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
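As a closing sketch, the selected ARIMA(3,0,2)+GARCH(1,1) combination can also be estimated jointly rather than in the two-stage fashion used above; the rugarch package supports this (assuming `log.jbht` is available; the joint estimates may differ slightly from the two-stage ones reported here):

```r
library(rugarch)

# Joint ARMA(3,2) mean equation (d = 0) with a standard GARCH(1,1) variance
spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(3, 2), include.mean = TRUE),
  distribution.model = "norm"
)
fit <- ugarchfit(spec, data = log.jbht)

# 10-step-ahead forecasts of both the series and its conditional volatility
ugarchforecast(fit, n.ahead = 10)
```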